Partial Oblique Projection Learning for Optimal Generalization

Authors

  • LIU Benyong
  • ZHANG Jing
Abstract

In practice, it is necessary for a learning method to support incremental and active learning. With such an implementation in view, this paper shows that the previously discussed S-L projection learning is inappropriate for constructing a family of projection learning, and proposes a new version called partial oblique projection (POP) learning. In POP learning, a function space is decomposed into two complementary subspaces, so that functions belonging to one of the subspaces can be estimated completely in the noiseless case, while in the noisy case the dispersions are kept as small as possible. In addition, a general form of POP learning is presented and the results of a simulation are given.
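As a minimal numerical sketch of the decomposition idea (not the paper's estimator), the snippet below builds an oblique projector onto one subspace along a complementary one in a finite-dimensional space; the bases S and N and all dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative bases (not from the paper): a 5-dimensional "function space"
# split into a 2-dimensional subspace S and a 3-dimensional complement N.
S = rng.standard_normal((5, 2))   # subspace whose elements should be recovered exactly
N = rng.standard_normal((5, 3))   # complementary subspace

# Oblique projector onto col(S) along col(N):  P = [S 0] [S N]^{-1}
M = np.hstack([S, N])                          # basis of the whole space
P = np.hstack([S, np.zeros((5, 3))]) @ np.linalg.inv(M)

# A "function" lying entirely in the subspace S is reproduced exactly by the
# oblique projection (the noiseless case described in the abstract).
f = S @ rng.standard_normal(2)
print(np.allclose(P @ f, f))      # True: complete estimation in the noiseless case

# Any component from the complement is annihilated.
g = N @ rng.standard_normal(3)
print(np.allclose(P @ g, 0.0))    # True: projection along the complement
```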


Similar resources

On Oblique Random Forests

Abstract. In his original paper on random forests, Breiman proposed two different decision tree ensembles: one generated from “orthogonal” trees with thresholds on individual features in every split, and one from “oblique” trees separating the feature space by randomly oriented hyperplanes. In spite of a rising interest in the random forest framework, however, ensembles built from orthogonal tr...
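The contrast between the two split types can be sketched as follows; the feature index, threshold, and random hyperplane are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((8, 3))   # toy feature matrix (8 samples, 3 features)

# Orthogonal split: threshold on a single feature, as in standard decision trees.
feature, threshold = 1, 0.0
orthogonal_left = X[:, feature] <= threshold

# Oblique split: threshold on the projection onto a randomly oriented hyperplane,
# i.e. a linear combination of all features.
w = rng.standard_normal(3)        # random hyperplane normal
oblique_left = X @ w <= threshold

print(orthogonal_left)
print(oblique_left)
```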


Constructive updating/downdating of oblique projectors: a generalization of the Gram-Schmidt process

A generalization of the Gram-Schmidt procedure is achieved by providing equations for updating and downdating oblique projectors. The work is motivated by the problem of adaptive signal representation outside the orthogonal basis setting. The proposed techniques are shown to be relevant to the problem of discriminating signals produced by different phenomena when the order of the signal model n...
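A hedged sketch of the object being updated: the standard oblique projector P = V (W^T V)^{-1} W^T, rebuilt from scratch after a spanning vector is appended. This naive recomputation only illustrates the setting; the paper's recursive update/downdate equations are not reproduced here, and all vectors are synthetic.

```python
import numpy as np

def oblique_projector(V, W):
    """Oblique projector with range span(V) and null space orthogonal to span(W),
    via the standard formula P = V (W^T V)^{-1} W^T (W^T V assumed invertible)."""
    return V @ np.linalg.inv(W.T @ V) @ W.T

rng = np.random.default_rng(2)
n = 6
V = rng.standard_normal((n, 2))   # current spanning set of the signal subspace
W = rng.standard_normal((n, 2))   # vectors defining the oblique measurement directions

P = oblique_projector(V, W)

# "Updating": append one vector to each set and rebuild the projector.
v_new, w_new = rng.standard_normal((n, 1)), rng.standard_normal((n, 1))
P_updated = oblique_projector(np.hstack([V, v_new]), np.hstack([W, w_new]))

# Sanity checks: both operators are idempotent projectors and reproduce span(V).
print(np.allclose(P @ P, P), np.allclose(P_updated @ P_updated, P_updated))
print(np.allclose(P @ V, V))
```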


Compressive Reinforcement Learning with Oblique Random Projections

Compressive sensing has been rapidly growing as a non-adaptive dimensionality reduction framework, wherein high-dimensional data is projected onto a randomly generated subspace. In this paper we explore a paradigm called compressive reinforcement learning, where approximately optimal policies are computed in a low-dimensional subspace generated from a high-dimensional feature space through rando...
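A minimal sketch of the dimensionality-reduction step described above: state features projected onto a randomly generated low-dimensional subspace. The dimensions and data are illustrative, and the policy-evaluation machinery built on top is omitted.

```python
import numpy as np

rng = np.random.default_rng(3)

d, k = 1000, 20                                 # high-dimensional features -> low-dimensional subspace
Phi = rng.standard_normal((500, d))             # toy feature matrix for 500 states
Projection = rng.standard_normal((d, k)) / np.sqrt(k)   # random projection matrix

Phi_compressed = Phi @ Projection               # states represented in the k-dimensional subspace

# Any linear value-function approximation (e.g. least squares on returns)
# would now be fit on Phi_compressed instead of Phi.
print(Phi_compressed.shape)                     # (500, 20)
```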


A Continuous-time Approach to the Oblique Procrustes Problem

In this paper we make use of the gradient flow approach to consider a generalization of the well-known oblique Procrustes rotation problem, involving oblique simple-structure rotation of both the core and component matrices resulting from three-mode factor analysis. The standard oblique Procrustes rotations to a specified factor structure and factor pattern follow as special cases. Th...
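As a rough stand-in for the continuous-time flow, the sketch below runs a projected gradient descent on the classical oblique Procrustes objective ||A T - B||_F with unit-length columns of T; the matrices, step size, and iteration count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((10, 4))          # loading matrix (toy data)
B = rng.standard_normal((10, 3))          # target structure (toy data)

T = rng.standard_normal((4, 3))
T /= np.linalg.norm(T, axis=0)            # oblique constraint: unit-length columns

step = 0.01
for _ in range(500):
    grad = A.T @ (A @ T - B)              # gradient of 0.5 * ||A T - B||_F^2
    T -= step * grad                      # discrete-time analogue of the gradient flow
    T /= np.linalg.norm(T, axis=0)        # project back onto the oblique manifold

print(np.linalg.norm(A @ T - B))          # residual after the descent
```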


A Computational Study of Incremental Projection Learning in Neural Networks

One of the essential properties of supervised learning in neural networks is generalization capability: the ability to give accurate results for data not seen during the learning process. One supervised learning method that theoretically guarantees optimal generalization capability is projection learning. The method was formulated as an inverse problem from a functional analytic point of view i...



Journal:

Volume  Issue

Pages  -

Publication date: 2007